    Investigating invariant item ordering in the Mental Health Inventory: an illustration of the use of different methods

    Invariant item ordering is a property of scales whereby the items retain the same ordering across the range of the latent trait and across respondents. The ability to analyse Mokken scales for invariant item ordering has recently become available in the package 'mokken' for the statistical software R, along with techniques for visually inspecting the item response curves of item pairs. While methods to assess invariant item ordering are available, there have been indications that items representing extremes of distress in mental well-being scales, such as suicidal ideation, may lead to claims of invariant item ordering where it does not exist. We used the Mental Health Inventory to test whether invariant item ordering was indicated in any derived Mokken scales and whether this was influenced by extreme items. A Mokken scale was derived that indicated invariant item ordering. Visual inspection of the item pairs showed that the most difficult item (suicidal ideation) was located far from the remaining cluster of items, and removing this item lowered invariant item ordering to an unacceptable level.
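    As a minimal sketch of this kind of analysis in the 'mokken' R package, assuming a hypothetical data frame `mhi` of Mental Health Inventory item scores (rows = respondents, columns = items) and a placeholder name for the extreme item:

    ```r
    library(mokken)

    # Derive Mokken scales via the automated item selection procedure.
    scales <- aisp(mhi)

    # Check invariant item ordering (IIO): HT coefficient and
    # per-item violations.
    iio <- check.iio(mhi)
    summary(iio)

    # Visually inspect the item response curves of item pairs.
    plot(iio)

    # Re-run the IIO check after removing the extreme item
    # ("suicidal_ideation" is a hypothetical column name).
    summary(check.iio(mhi[, setdiff(names(mhi), "suicidal_ideation")]))
    ```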

    Violations of local stochastic independence exaggerate scalability in Mokken scaling analysis of the Chinese Mandarin SF-36

    Background: Previous work using Mokken scaling analysis with the SF-36 has found subscales that appear to show excellent Mokken scaling properties. However, the scalability values of those subscales are very large, raising the possibility that they are artificially high as a result of violations of local stochastic independence between items. Objectives: To analyse selected items from the Chinese Mandarin form of the SF-36 using Mokken scaling and to investigate whether violations of local stochastic independence exaggerate scalability. Methods: Exploratory Mokken scaling analysis was run in the public domain statistical software R by entering 19 items from the Chinese Mandarin form of the SF-36 into the analysis. The items in the resulting scales, judged by the size of Loevinger's coefficient, were analysed for violations of monotonicity, 95% confidence intervals and invariant item ordering, including inspection of item pair plots. Results: Two Mokken scales were obtained, one including items from the Physical Functioning subscale and one including items from the Mental Health subscale of the Chinese Mandarin form of the SF-36. The Physical Functioning scale was very strong according to Loevinger's coefficient, with high invariant item ordering; the Mental Health scale was moderately strong, with weak invariant item ordering. Conclusion: The strength of the Physical Functioning Mokken scale derived from the Chinese Mandarin form of the SF-36 is probably the result of an item chain and item overlap that violate local stochastic independence. This stems from the nature of the items in the Physical Functioning subscale, all of which relate to physical ability and some of which can only be achieved if previous items in the subscale have been achieved.
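    A sketch of the procedure described here, assuming a hypothetical data frame `sf36` holding the 19 selected items and a placeholder vector `pf_items` of Physical Functioning item names:

    ```r
    library(mokken)

    # Partition the 19 items into Mokken scales (default lower bound c = .3).
    partition <- aisp(sf36)

    # Loevinger's scalability coefficients with standard errors; a 95%
    # confidence interval for the scale H is roughly H +/- 1.96 * se.
    coefH(sf36[, pf_items])

    # Check for violations of monotonicity and for invariant item
    # ordering, including the item pair plots.
    summary(check.monotonicity(sf36[, pf_items]))
    iio <- check.iio(sf36[, pf_items])
    summary(iio)
    plot(iio)
    ```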

    Cardiac Depression Scale: Mokken scaling in heart failure patients

    Background: There is a high prevalence of depression in patients with heart failure (HF), and it is associated with a worsening prognosis. A reliable and valid instrument to measure depression in this population is therefore essential. We validated the Cardiac Depression Scale (CDS) in heart failure patients using a model of ordinal unidimensional measurement known as Mokken scaling. Findings: We administered the CDS in face-to-face interviews to 603 patients with HF. Data were analysed using Mokken scale analysis. Items of the CDS formed a statistically significant unidimensional Mokken scale of low strength (H<0.40) and high reliability (Rho>0.8). Conclusions: The CDS has a hierarchy of items which can be interpreted in terms of the increasingly serious effects of depression occurring as a result of HF. Identifying an appropriate instrument to measure depression in patients with HF allows for early identification and better medical management. Keywords: Cardiac Depression Scale, Heart failure, Depression, Mokken scaling
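    The two headline statistics can be reproduced in outline as follows; `cds` is a hypothetical data frame of CDS item scores for the 603 patients:

    ```r
    library(mokken)

    coefH(cds)$H               # Loevinger's H; < 0.40 indicates a low-strength scale
    check.reliability(cds)$MS  # Molenaar-Sijtsma rho; > 0.8 indicates high reliability
    ```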

    Construction of a consistent market price data base for a general equilibrium model of China


    A ship-based methodology for high precision atmospheric oxygen measurements and its application in the Southern Ocean region

    A method for achieving continuous high-precision measurements of atmospheric O2 is presented, based on a commercially available fuel-cell instrument (Sable Systems Oxzilla FC-II) with a precision of 7 per meg (approximately equivalent to 1.2 ppm) for a 6-min measurement. The Oxzilla was deployed on two voyages in the Western Pacific sector of the Southern Ocean, in February 2003 and in April 2004, making these only the second set of continuous O2 measurements ever made from a ship. The results show significant temporal variation in O2, on the order of +/- 10 per meg over 6-hourly time intervals, and substantial spatial variation. Data from both voyages show an O2 maximum centred on 50 degrees S, which is most likely the result of biologically driven O2 outgassing in the region of subtropical convergence around New Zealand, and a decreasing O2 trend towards Antarctica. O2 from the ship-based measurements is elevated compared with measurements from the Scripps Institution of Oceanography flask-sampling network, and the O2 maximum is also not captured in the network observations. This preliminary study shows that ship-based continuous measurements are a valuable addition to current fixed-site sampling programmes for understanding ocean-atmosphere O2 exchange processes.

    Adaptive Sampling of Time Series During Remote Exploration

    This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). The problem is treated from an information-theoretic perspective: a probabilistic model is fitted to the collected data and the future sampling strategy is optimized to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (i.e., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches. Most common GP models are stationary, i.e., the covariance relationships are time-invariant. In such cases, information gain is independent of previously collected data, and the optimal solution can always be computed in advance. Information-optimal sampling of a stationary GP time series thus reduces to even spacing, and such models are not appropriate for tracking localized anomalies. Additionally, GP model inference can be computationally expensive.
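    A minimal sketch of variance-greedy sampling with a stationary squared-exponential GP; all data and hyperparameters below are illustrative, not from the paper:

    ```r
    # Squared-exponential covariance between two sets of times.
    sqexp <- function(a, b, ell = 1, s2 = 1)
      s2 * exp(-outer(a, b, "-")^2 / (2 * ell^2))

    # Hypothetical times already sampled and their measurements.
    t_obs <- c(0, 1, 2, 3.5)
    y_obs <- c(0.1, 0.4, 0.2, 1.3)
    noise <- 1e-4

    # GP posterior predictive variance at candidate future sample times.
    # Note that y_obs never enters the variance: this is exactly the
    # stationary-GP property discussed above.
    t_new <- seq(4, 10, by = 0.1)
    K  <- sqexp(t_obs, t_obs) + noise * diag(length(t_obs))
    Ks <- sqexp(t_new, t_obs)
    v  <- diag(sqexp(t_new, t_new) - Ks %*% solve(K, t(Ks)))

    # For a GP, information gain is monotone in predictive variance,
    # so the next sample time is the most uncertain candidate.
    t_next <- t_new[which.max(v)]
    ```

    Because the posterior variance of a stationary GP does not depend on the observed values, this criterion degenerates to roughly even spacing, which is the limitation the nonstationary models are meant to overcome.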

    Flightspeed Integral Image Analysis Toolkit

    The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image," a caching method that permits fast summation of values within rectangular regions of an image and thereby facilitates a wide range of fast image-processing functions. The toolkit is applicable to a wide range of autonomous image analysis tasks in the spaceflight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints, providing an order-of-magnitude speed increase over alternative software libraries currently in use by the research community. Commercially, FIIAT can support intelligent video cameras used in surveillance, and it is also useful for object recognition by robots or other autonomous vehicles.
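    FIIAT itself is a C library; the integral-image trick it relies on can be sketched in a few lines of R (the image here is random placeholder data):

    ```r
    # Hypothetical 8-bit grayscale image as an integer matrix.
    img <- matrix(sample(0:255, 64 * 64, replace = TRUE), 64, 64)

    # Integral image: I[r, c] = sum(img[1:r, 1:c]), built from two cumsums.
    I <- t(apply(apply(img, 2, cumsum), 1, cumsum))

    # Pad with a zero row/column so rectangles touching the edges work too.
    Ipad <- rbind(0, cbind(0, I))

    # Sum over any rectangle [r1..r2] x [c1..c2] with just four lookups.
    rect_sum <- function(r1, r2, c1, c2)
      Ipad[r2 + 1, c2 + 1] - Ipad[r1, c2 + 1] - Ipad[r2 + 1, c1] + Ipad[r1, c1]

    rect_sum(10, 20, 5, 15) == sum(img[10:20, 5:15])  # TRUE
    ```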

    Fast Image Texture Classification Using Decision Trees

    Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations, making it infeasible for radiation-hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic, so the texture analysis method is amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
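    As an illustrative stand-in for the flight classifier (rpart's CART/Gini splitting replaces the paper's k-means inter-cluster-variance criterion, and the features and labels are fabricated), per-pixel texture classification from cheap integer features might look like:

    ```r
    library(rpart)

    # Hypothetical per-pixel features: mean intensity over box filters of
    # three sizes, computable with integer arithmetic via integral images.
    n <- 1000
    feats <- data.frame(
      box3  = sample(0:255, n, replace = TRUE),
      box9  = sample(0:255, n, replace = TRUE),
      box27 = sample(0:255, n, replace = TRUE),
      class = factor(sample(c("rock", "sand", "shadow"), n, replace = TRUE))
    )

    # Learn a shallow decision tree approximating full texton analysis.
    tree <- rpart(class ~ box3 + box9 + box27, data = feats, method = "class")

    # Classify new pixels from their integer features alone.
    predict(tree, feats[1:5, ], type = "class")
    ```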

    Pancreatic cancer patient survival correlates with DNA methylation of pancreas development genes.

    DNA methylation is an epigenetic mark associated with the regulation of transcription and genome structure. Such marks have been investigated in a variety of cancer settings for their utility in differentiating normal tissue from tumor tissue. Here, we examine the direct correlation between DNA methylation and patient survival. We find that changes in the DNA methylation of key pancreatic developmental genes are strongly associated with patient survival.
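    The abstract does not specify the survival model; as one common way to test such an association, a Cox proportional hazards sketch with hypothetical column names:

    ```r
    library(survival)

    # 'meth' is a hypothetical data frame: one row per patient, with
    # follow-up time, an event indicator (1 = death), and a methylation
    # beta value for a CpG site in a pancreas development gene.
    fit <- coxph(Surv(time_months, death) ~ beta_value, data = meth)
    summary(fit)  # hazard ratio per unit increase in methylation
    ```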